On the optimality of conditional expectation as a Bregman predictor
Authors
Abstract
Given a probability space (Ω, F, P), an F-measurable random variable X, and a sub-σ-algebra G ⊂ F, it is well known that the conditional expectation E[X|G] is the optimal L²-predictor (also known as the least mean square error predictor) of X among all G-measurable random variables [8, 11]. In this paper, we provide necessary and sufficient conditions under which the conditional expectation is the unique optimal predictor. We show that E[X|G] is the optimal predictor for all Bregman Loss Functions (BLFs), of which the L² loss function is a special case. Moreover, under mild conditions, we show that BLFs are exhaustive: if the infimum of E[F(X, Y)] over all G-measurable random variables Y is attained at the conditional expectation E[X|G] for every random variable X, then F is a BLF.
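For context, a Bregman loss function is typically defined via a strictly convex, differentiable function $\phi$ as $F_\phi(x, y) = \phi(x) - \phi(y) - \phi'(y)(x - y)$; this standard definition is our addition and is not stated in the abstract. Taking $\phi(t) = t^2$ gives $F_\phi(x, y) = (x - y)^2$, i.e. the L² (mean square error) loss mentioned above, while $\phi(t) = t \log t$ gives the generalized Kullback-Leibler divergence $x \log(x/y) - x + y$.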
Similar articles
CONDITIONAL EXPECTATION IN THE KOPKA'S D-POSETS
The notion of a $D$-poset was introduced in connection with quantum mechanical models. In this paper, we introduce the conditional expectation of random variables on Kôpka's $D$-poset and prove the basic properties of conditional expectation on this structure.
Efficient Simulation of a Random Knockout Tournament
We consider the problem of using simulation to efficiently estimate the win probabilities for participants in a general random knockout tournament. Both of our proposed estimators, one based on the notion of “observed survivals” and the other based on conditional expectation and post-stratification, are highly effective in terms of variance reduction when compared to the raw simulation estimato...
Some algebraic properties of Lambert Multipliers on $L^2$ spaces
In this paper, we determine the structure of the space of multipliers of the range of a composition operator $C_\varphi$ that is induced by the conditional expectation between two $L^p(\Sigma)$ spaces.
Mean-Squared Error Analysis of Kernel Regression Estimator for Time Series
Because of a lack of a priori information, the minimum mean-squared error predictor, the conditional expectation, is often not known for a non-Gaussian time series. We show that the nonparametric kernel regression estimator of the conditional expectation is mean-squared consistent for a time series: When used as a predictor, the estimator asymptotically matches the mean-squared error produced b...
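For reference, the kernel regression estimator referred to here is, under the usual convention (an assumption on our part, since the snippet does not write it out), the Nadaraya-Watson estimator $\hat m_h(x) = \sum_i K((x - X_i)/h)\, Y_i / \sum_i K((x - X_i)/h)$, where $K$ is a kernel and $h$ a bandwidth; it estimates the conditional expectation $E[Y \mid X = x]$ used as the predictor.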
Training Conditional Random Fields with Natural Gradient Descent
We propose a novel parameter estimation procedure that works efficiently for conditional random fields (CRFs). This algorithm is an extension of maximum likelihood estimation (MLE), using loss functions defined by Bregman divergences, which measure the proximity between the model expectation and the empirical mean of the feature vectors. This leads to a flexible training framework from which ...
Journal: IEEE Trans. Information Theory
Volume: 51
Pages: -
Publication date: 2005